Mapping people dynamics is a crucial skill, since it enables robots to coexist in environments inhabited by humans. However, learning a model of human dynamics is a time-consuming process which requires observing a large number of people moving around an environment. Moreover, methods for mapping dynamics cannot transfer a learned model across environments: each model only describes the dynamics of the environment it was built in. Nevertheless, the influence of architectural geometry on human movement can be used to estimate the dynamics of an environment, and recent work has aimed at learning maps of dynamics from geometry. So far, however, these methods have been evaluated only on small synthetic datasets, leaving their practical ability to generalize to real-world conditions unexplored. In this work, we propose a novel approach for learning people dynamics from geometry, where the model is trained and evaluated on real-world, large-scale environments. We then demonstrate our method's ability to generalize to unseen environments, which is unprecedented for maps of dynamics.
We apply the Hierarchical Autoregressive Neural (HAN) network sampling algorithm to the two-dimensional $Q$-state Potts model and perform simulations around the phase transition at $Q=12$. We quantify the performance of the approach in the vicinity of the first-order phase transition and compare it with that of the Wolff cluster algorithm. We find a significant improvement as far as the statistical uncertainty is concerned, at a similar numerical effort. In order to efficiently train large neural networks we introduce the technique of pre-training: it allows us to train some neural networks using smaller system sizes and then employ them as starting configurations for larger system sizes. This is possible due to the recursive construction of our hierarchical approach. Our results serve as a demonstration of the performance of the hierarchical approach for systems exhibiting bimodal distributions. Additionally, we provide estimates of the free energy and entropy in the vicinity of the phase transition, with statistical uncertainties of the order of $10^{-7}$ for the former and $10^{-3}$ for the latter, based on statistics of $10^6$ configurations.
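The simulations above target the two-dimensional $Q$-state Potts model. As a point of reference, its standard Hamiltonian with periodic boundaries, $E = -\sum_{\langle i,j \rangle} \delta_{s_i,s_j}$, can be sketched as follows (a minimal illustration; the function name and layout are ours, not from the paper's code):

```python
def potts_energy(spins, Q=12):
    """Energy of the 2D Q-state Potts model with periodic boundaries:
    E = -sum over nearest-neighbour pairs <i,j> of delta(s_i, s_j),
    where spins is an L x L grid of states in {0, ..., Q-1}."""
    L = len(spins)
    E = 0
    for i in range(L):
        for j in range(L):
            # count each bond exactly once: right and down neighbours
            E -= int(spins[i][j] == spins[(i + 1) % L][j])
            E -= int(spins[i][j] == spins[i][(j + 1) % L])
    return E
```

For a fully ordered $L \times L$ configuration every one of the $2L^2$ bonds is satisfied, so the energy is $-2L^2$; near the first-order transition at large $Q$, ordered and disordered configurations of very different energy coexist, producing the bimodal distributions the hierarchical approach is tested on.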
The rapid development of social robotics has stimulated active research in human motion modeling, interpretation and prediction, proactive collision avoidance, human-robot interaction, and co-habitation in shared spaces. Modern approaches in these areas require high-quality datasets for training and evaluation. However, most of the available datasets suffer either from inaccurate tracking data or from unnatural, scripted behavior of the tracked people. This paper attempts to fill this gap by providing high-quality tracking information from motion capture, an eye-gaze tracker, and on-board robot sensors in a semantically rich environment. To induce natural behavior in the recorded participants, we utilize loosely scripted task assignments, which have participants navigate a dynamic laboratory environment in a natural and purposeful way. The motion dataset presented in this paper sets a high quality standard, as the realistic and accurate data is enhanced with semantic information, enabling the development of new algorithms which rely not only on the tracking information but also on contextual cues of the moving agents, as well as the static and dynamic environment.
In recent years, event cameras (DVS - Dynamic Vision Sensors) have been used in vision systems as an alternative or supplement to traditional cameras. They are characterized by high dynamic range, high temporal resolution, low latency, and reliable performance in limited lighting conditions - parameters that are particularly important in the context of advanced driver assistance systems (ADAS) and self-driving cars. In this work, we test whether these rather novel sensors can be applied to the popular task of traffic sign detection. To this end, we analyze different representations of the event data: event frames, event frequency, and the exponentially decaying time surface, and apply video frame reconstruction using a deep neural network called FireNet. We use the deep convolutional neural network YOLOv4 as a detector. For particular representations, we obtain a detection accuracy in the range of 86.9-88.9% mAP@0.5. The use of a fusion of the considered representations allows us to obtain a detector with a higher accuracy of 89.9% mAP@0.5. In comparison, the detector for frames reconstructed with FireNet is characterized by 52.67% mAP@0.5. The results obtained illustrate the potential of event cameras in automotive applications, either as a standalone sensor or in close cooperation with typical frame-based cameras.
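Of the event representations listed above, the exponentially decaying time surface is the easiest to state precisely: each pixel stores how recently it last fired, mapped through an exponential decay. A minimal sketch (the function name, event tuple layout `(x, y, t, polarity)`, and default time constant are illustrative assumptions, not the paper's implementation):

```python
import math

def decaying_time_surface(events, width, height, t_now, tau=0.05):
    """Build an exponentially decaying time surface from DVS events.
    Each pixel holds exp(-(t_now - t_last)/tau) for the most recent
    event at that pixel, so fresh events are bright and older ones fade.
    `events` is an iterable of (x, y, t, polarity), sorted by time."""
    t_last = [[None] * width for _ in range(height)]
    for x, y, t, _polarity in events:
        t_last[y][x] = t  # later events overwrite earlier ones
    surface = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if t_last[y][x] is not None:
                surface[y][x] = math.exp(-(t_now - t_last[y][x]) / tau)
    return surface
```

A pixel that fired at `t_now` has value 1.0, while one that fired a time constant `tau` earlier has decayed to `exp(-1)`; rendering such surfaces as images is what makes them usable as input to a frame-based detector like YOLOv4.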
Complex reasoning problems contain states that vary in the computational cost required to determine a good action plan. Exploiting this property, we propose Adaptive Subgoal Search (AdaSubS), a search method that adaptively adjusts the planning horizon. To this end, AdaSubS generates diverse sets of subgoals at different distances. A verification mechanism is employed to swiftly filter out unreachable subgoals, allowing the search to focus on feasible further subgoals. In this way, AdaSubS benefits from the efficiency of planning with longer-distance subgoals and the fine control offered by shorter ones. We show that AdaSubS significantly surpasses hierarchical planning algorithms on three complex reasoning tasks: Sokoban, the Rubik's Cube, and the inequality-proving benchmark INT, setting a new state of the art for INT.
We provide a deepened study of autocorrelations in Neural Markov Chain Monte Carlo simulations, a version of the traditional Metropolis algorithm which employs neural networks to provide independent proposals. We illustrate our ideas using the two-dimensional Ising model. We propose several estimates of the autocorrelation time, some of them inspired by analytical results derived for the Metropolized independent sampler, which we compare and study as a function of the inverse temperature $\beta$. Based on that, we propose an alternative loss function and study its impact on the autocorrelations. Furthermore, we investigate the impact of imposing system symmetries ($Z_2$ and/or translational) during neural network training on the autocorrelation times. Eventually, we propose a scheme which incorporates partial heat-bath updates. The impact of the above enhancements is discussed for a $16 \times 16$ spin system. The summary of our findings may serve as a guide for the implementation of Neural Markov Chain Monte Carlo simulations of more complicated models.
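The Metropolized independent sampler mentioned above accepts a proposal $y$ drawn from a fixed distribution $q$ (here played by a neural network) with probability $\min\!\big(1, \tfrac{p(y)\,q(x)}{p(x)\,q(y)}\big)$. A minimal sketch of one such step (generic function names, any toy target and proposal; not the paper's code):

```python
import math
import random

def metropolized_independence_step(x, log_p, log_q, propose, rng=random):
    """One step of the Metropolized independent sampler: draw a proposal
    y from q (in Neural MCMC, q is a neural network), then accept with
    probability min(1, p(y)q(x) / (p(x)q(y))), computed in log space."""
    y = propose()
    log_alpha = (log_p(y) - log_p(x)) + (log_q(x) - log_q(y))
    if rng.random() < math.exp(min(0.0, log_alpha)):
        return y, True   # accepted: chain jumps to the proposal
    return x, False      # rejected: chain stays put
```

When the proposal distribution matches the target exactly, `log_alpha` is always zero and every proposal is accepted; the autocorrelation times studied in the abstract measure how far a trained network falls short of that ideal.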
Communication is compositional if complex signals can be represented as a combination of simpler subparts. In this paper, we theoretically show that inductive biases on both the training framework and the data are needed to develop compositional communication. Moreover, we prove that compositionality spontaneously arises in signaling games in which the agents communicate over a noisy channel. We experimentally confirm that a range of noise levels, which depends on the model and the data, indeed promotes compositionality. Finally, we provide a comprehensive study of this dependence and report results in terms of recently studied compositionality metrics: topographic similarity, conflict count, and context independence.
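Of the metrics listed above, topographic similarity is the most widely used: it correlates pairwise distances in meaning space with pairwise distances in message space, so that similar meanings mapping to similar messages yields a high score. A minimal sketch (Pearson correlation is used here for simplicity; the literature often uses Spearman, and the function names are ours):

```python
def topographic_similarity(meanings, messages, dist_meaning, dist_message):
    """Correlation between pairwise meaning distances and pairwise
    message distances over all unordered pairs of examples."""
    n = len(meanings)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    dm = [dist_meaning(meanings[i], meanings[j]) for i, j in pairs]
    ds = [dist_message(messages[i], messages[j]) for i, j in pairs]
    return _pearson(dm, ds)

def _pearson(a, b):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)
```

For a perfectly compositional toy language, where each attribute of the meaning maps to one symbol of the message, the two distance profiles coincide and the score is 1.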
Audio DeepFakes are artificially generated utterances created using deep learning methods with the main aim of fooling the listeners; most of such audio is highly convincing. Their quality is sufficient to pose a serious threat in terms of security and privacy, such as the reliability of news or defamation. To prevent these threats, multiple neural-network-based methods to detect generated speech have been proposed. In this work, we cover the topic of adversarial attacks, which decrease the performance of detectors by adding superficial (difficult for a human to spot) changes to the input data. Our contribution consists of evaluating the robustness of three detection architectures against adversarial attacks in two scenarios (white-box and using a transferability mechanism) and then enhancing it through adversarial training performed with our novel adaptive training method.
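A standard white-box attack of the kind discussed above is the Fast Gradient Sign Method (FGSM): each input feature is nudged by a small `eps` in the direction of the sign of the loss gradient, making the change hard to spot while maximally increasing the loss. A minimal sketch against a logistic-regression stand-in for a detector (the paper attacks deep detectors; the model, names, and numbers here are illustrative assumptions):

```python
import math

def fgsm_perturb(x, w, b, label, eps):
    """FGSM against a logistic 'detector' p = sigmoid(w . x + b).
    Each feature moves by eps in the sign of the cross-entropy loss
    gradient w.r.t. the input, pushing the prediction away from the
    true label with a perturbation bounded by eps per feature."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    grad = [(p - label) * wi for wi in w]   # d(loss)/dx for cross-entropy
    return [xi + eps * ((g > 0) - (g < 0)) for xi, g in zip(x, grad)]
```

Adversarial training, used in the abstract to restore robustness, simply mixes such perturbed inputs (with their original labels) back into the training set.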
This short report reviews the current state of the research and methodology on theoretical and practical aspects of Artificial Neural Networks (ANN). It was prepared to gather state-of-the-art knowledge needed to construct complex, hypercomplex and fuzzy neural networks. The report reflects the individual interests of the authors and, by no means, can be treated as a comprehensive review of the ANN discipline. Considering the fast development of this field, a detailed review would currently be impossible within a reasonable number of pages. The report is an outcome of the meeting of the project 'The Strategic Research Partnership for the mathematical aspects of complex, hypercomplex and fuzzy neural networks' at the University of Warmia and Mazury in Olsztyn, Poland, organized in September 2022.
Petrov-Galerkin formulations with optimal test functions allow for the stabilization of finite element simulations. In particular, given a discrete trial space, the optimal test space induces a numerical scheme delivering the best approximation in terms of a problem-dependent energy norm. This ideal approach has two shortcomings: first, we need to explicitly know the set of optimal test functions; and second, the optimal test functions may have large supports, inducing expensive dense linear systems. Nevertheless, parametric families of PDEs are an example where it is worth investing some (offline) computational effort to obtain stabilized linear systems that can be solved efficiently, for a given set of parameters, in an online stage. Therefore, as a remedy for the first shortcoming, we explicitly compute (offline) a function mapping any PDE-parameter to the matrix of coefficients of the optimal test functions (in a basis expansion) associated with that PDE-parameter. Next, as a remedy for the second shortcoming, we use low-rank approximation to hierarchically compress the (non-square) matrix of coefficients of the optimal test functions. In order to accelerate this process, we train a neural network to learn a critical bottleneck of the compression algorithm (for a given set of PDE-parameters). When solving online the resulting (compressed) Petrov-Galerkin formulation, we employ a GMRES iterative solver with inexpensive matrix-vector multiplications thanks to the low-rank features of the compressed matrix. We perform experiments showing that the full online procedure is as fast as the original (unstable) Galerkin approach. In other words, we get the stabilization with hierarchical matrices and neural networks practically for free. We illustrate our findings by means of 2D Eriksson-Johnson and Helmholtz model problems.
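The "inexpensive matrix-vector multiplications" in the abstract come from storing the compressed matrix in factored form: if $B = UV^{T}$ with $U \in \mathbb{R}^{m \times k}$ and $V \in \mathbb{R}^{n \times k}$, then $Bx = U(V^{T}x)$ costs $O((m+n)k)$ instead of $O(mn)$. A minimal sketch of this factored matvec, the building block GMRES would call (plain Python lists for illustration; the paper uses hierarchical matrices, which apply this idea blockwise):

```python
def lowrank_matvec(U, V, x):
    """Compute B @ x where B = U @ V^T is stored only through its
    low-rank factors U (m x k) and V (n x k). Cost is O((m + n) k)
    per product, versus O(m n) for the dense matrix."""
    k = len(V[0])
    n, m = len(V), len(U)
    # t = V^T x, a length-k intermediate vector
    t = [sum(V[i][r] * x[i] for i in range(n)) for r in range(k)]
    # y = U t, the final length-m result
    return [sum(U[i][r] * t[r] for r in range(k)) for i in range(m)]
```

Inside a Krylov solver such as GMRES, only such matrix-vector products are needed, which is why compressing the matrix of optimal test function coefficients makes the whole online stage cheap.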